
    Isolation of a flavone glucoside from Glycosmis mauritiana (Rutaceae)

    From the ethyl acetate extract of the roots of Glycosmis mauritiana, a flavone glucoside, luteolin 8-C-β-D-glucopyranoside, was isolated. The structure was established by UV, IR, NMR and mass spectral studies.

    Exploring the Influence of Social Media Advertising on Purchase Intention: A Test of Electronic Word of Mouth as a Mediator in Pakistan

    This research analyses the impact of social media advertising (SMA) in creating positive electronic word of mouth (e-WOM) about fast-food purchase intentions among consumers in Pakistan. Its objective is to evaluate the relationships of social media advertising with electronic word of mouth and with the purchase intention of fast food in Pakistan. A target sample size of 384 consumers was set following the guidance of Krejcie and Morgan (1970), but only the 340 questionnaires actually returned are included in the analysis. Regression and correlation analyses were applied to the collected data. The findings validate the stated hypotheses: SMA has a significant, positive relationship with both electronic word of mouth and purchase intention. The study's implications help fast-food brands manage electronic word of mouth successfully, using the right content and generating positive attitudes, so that electronic word-of-mouth content on social media can influence the purchase intentions of fast-food consumers in Pakistan.
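The correlation and regression step described above can be sketched as follows. The data and variable names below are purely illustrative, not taken from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ols_fit(x, y):
    """Simple linear regression y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return my - b * mx, b  # intercept, slope

# Hypothetical 5-point Likert scores: SMA exposure vs purchase intention.
sma = [3, 4, 5, 2, 4, 5, 3, 1]
intent = [3, 4, 4, 2, 5, 5, 2, 1]
print(pearson_r(sma, intent))
print(ols_fit(sma, intent))
```

A significant positive r and slope on the real survey data would correspond to the relationship the abstract reports.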

    Volatility at Karachi Stock Exchange

    Frequent “crashes” of the stock market reported during 1994 suggest that the Karachi bourse is rapidly turning into a volatile market. This cannot be viewed as a positive sign for this developing market of South Asia. Heavy fluctuations in stock prices are not an unusual phenomenon and have been observed at almost all exchanges of the world, big and small. Still, focusing on the reasons for such fluctuations is instructive and likely to have important policy implications. Proponents of the efficient market hypothesis argue that changes in stock prices depend mainly on the arrival of information regarding the expected returns from the stock. However, Fama (1965), French (1980), and French and Roll (1986) observed that volatility is to some extent caused by trading itself. Portfolio insurance schemes also have the potential to increase volatility; the Brady Commission’s report provides useful insights into their effect. It is interesting to note that many analysts consider the so-called “crashes” of the Karachi stock market a deliberate move to bring down prices. This study examines the effect of trading on the volatility of stock prices at the Karachi Stock Exchange (KSE). Its findings will help in understanding the mechanism of the rise and fall of stock prices at the Karachi bourse.
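The trading-versus-calendar-time test associated with French and Roll can be sketched as below, on synthetic prices (real KSE data would replace them). The generator gives every trading day the same return variance, so the weekend-to-weekday variance ratio comes out near 1; under a calendar-time hypothesis, returns spanning the Fri-to-Mon gap would instead show roughly three times the variance:

```python
import math, random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
# Synthetic daily closes on trading days only; prices[0] is a Monday close,
# so every fifth return (index i with i % 5 == 4) spans the Fri->Mon gap.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * math.exp(random.gauss(0, 0.01)))
returns = [math.log(p2 / p1) for p1, p2 in zip(prices, prices[1:])]

weekend_ret = [r for i, r in enumerate(returns) if i % 5 == 4]
intraweek_ret = [r for i, r in enumerate(returns) if i % 5 != 4]

# Ratio near 1 => volatility accrues per trading day (trading causes it);
# ratio near 3 => volatility accrues per calendar day.
ratio = variance(weekend_ret) / variance(intraweek_ret)
print(round(ratio, 2))
```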

    Automatic Ground Truth Expansion for Timeline Evaluation

    The development of automatic systems that can produce timeline summaries by filtering high-volume streams of text documents, retaining only those that are relevant to a particular information need (e.g. a topic or event), remains a very challenging task. To advance the field of automatic timeline generation, robust and reproducible evaluation methodologies are needed. To this end, several evaluation metrics and labeling methodologies have recently been developed, focusing on information-nugget or cluster-based ground-truth representations. These methodologies rely on human assessors manually mapping timeline items (e.g. tweets) to an explicit representation of what information a 'good' summary should contain. However, while these evaluation methodologies produce reusable ground-truth labels, prior work has reported cases where such labels fail to accurately estimate the performance of new timeline generation systems due to label incompleteness. In this paper, we first quantify the extent to which timeline summary ground-truth labels fail to generalize to new summarization systems, and we then propose and evaluate new automatic solutions to this issue. In particular, using a depooling methodology over 21 systems and across three high-volume datasets, we quantify the degree of system ranking error caused by excluding those systems when labeling. We show that when considering lower-effectiveness systems, the test collections are robust (the likelihood of systems being mis-ranked is low). However, the risk of systems being mis-ranked increases as the effectiveness of the systems held out from the pool increases. To reduce this risk, we propose two different automatic ground-truth label expansion techniques. Our results show that the proposed expansion techniques are effective for increasing the robustness of the TREC-TS test collections, markedly reducing the number of mis-rankings, by up to 50% on average among the scenarios tested.
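The depooling idea can be illustrated in miniature: rebuild the pooled labels with one system's contribution held out, re-score every system against the depooled labels, and count pairwise rank swaps relative to the full-pool ranking. The systems, item sets, and recall-only scoring below are toy assumptions, not the paper's setup:

```python
from itertools import combinations

# Each "system" returns a set of item ids; the pooled ground truth is the
# union of relevant items contributed by the pooled systems.
TRUE_RELEVANT = set(range(30))       # hypothetical complete relevant set
systems = {
    "A": set(range(0, 20)),          # strong system
    "B": set(range(5, 18)),          # mid system
    "C": set(range(10, 16)) | {40},  # weak system (item 40 is non-relevant)
}

def recall(retrieved, qrels):
    return len(retrieved & qrels) / len(qrels) if qrels else 0.0

def ranking(qrels):
    return sorted(systems, key=lambda s: recall(systems[s], qrels), reverse=True)

full_pool = TRUE_RELEVANT & set.union(*systems.values())
full_rank = ranking(full_pool)

# Depooling: drop each system's contribution from the labels and count the
# pairwise swaps ("mis-rankings") this induces against the full-pool ranking.
for held_out in systems:
    pool = TRUE_RELEVANT & set.union(*(v for k, v in systems.items()
                                       if k != held_out))
    r = ranking(pool)
    swaps = sum(1 for a, b in combinations(full_rank, 2)
                if r.index(a) > r.index(b))
    print(held_out, r, "swaps:", swaps)
```

In this toy case the ranking is stable under depooling; the paper's finding is that swaps appear once the held-out system is more effective than the pooled ones.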

    Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance

    A recent innovation in computational learning theory is the statistical query (SQ) model. The advantage of specifying learning algorithms in this model is that SQ algorithms can be simulated in the probably approximately correct (PAC) model, both in the absence and in the presence of noise. However, simulations of SQ algorithms in the PAC model have non-optimal time and sample complexities. In this paper, we introduce a new method for specifying statistical query algorithms based on a type of relative error, and provide simulations in the noise-free and noise-tolerant PAC models which yield more efficient algorithms. Requests for estimates of statistics in this new model take the following form: “Return an estimate of the statistic within a 1 ± μ factor, or return ⊥, promising that the statistic is less than θ.” In addition to showing that this is a very natural language for specifying learning algorithms, we also show that this new specification is polynomially equivalent to standard SQ, and thus known learnability and hardness results for statistical query learning are preserved. We then give highly efficient PAC simulations of relative error SQ algorithms. We show that the learning algorithms obtained by simulating efficient relative error SQ algorithms, both in the absence of noise and in the presence of malicious noise, have roughly optimal sample complexity. We also show that the simulation of efficient relative error SQ algorithms in the presence of classification noise yields learning algorithms at least as efficient as those obtained through standard methods, and in some cases improved, roughly optimal results are achieved. The sample complexities for all of these simulations are based on the d_ν metric, a type of relative error metric useful for quantities which are small or even zero. We show that uniform convergence with respect to the d_ν metric yields “uniform convergence” with respect to (μ, θ) accuracy. Finally, while we show that many specific learning algorithms can be written as highly efficient relative error SQ algorithms, we also show, in fact, that all SQ algorithms can be written efficiently by proving general upper bounds on the complexity of (μ, θ) queries as a function of the accuracy parameter ε. As a consequence of this result, we give general upper bounds on the complexity of learning algorithms achieved through the use of relative error SQ algorithms and the simulations described above.
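A (μ, θ) query of the form quoted above can be sketched by sampling: estimate the statistic empirically, return the estimate if it clears the threshold, and return ⊥ otherwise. The sample-size constant and the threshold test are heuristic stand-ins, not the paper's exact bounds:

```python
import math, random

def relative_error_query(chi, sample_oracle, mu, theta, delta=0.05):
    """Sketch of a (mu, theta) query: estimate p = E[chi(x)] to within a
    (1 +/- mu) multiplicative factor, or return None (standing in for the
    "bot" symbol) when the empirical estimate falls below theta. Sample size
    follows the usual multiplicative Chernoff-style heuristic
    m = O(log(1/delta) / (mu^2 * theta)); the exact constants differ in the
    paper's analysis."""
    m = math.ceil(3 * math.log(2 / delta) / (mu ** 2 * theta))
    hits = sum(chi(sample_oracle()) for _ in range(m))
    p_hat = hits / m
    if p_hat < theta:
        return None  # with high probability the true statistic is below ~theta
    return p_hat

random.seed(1)
# Example: estimate the probability that a uniform bit-pair is (1, 1), i.e. 0.25.
oracle = lambda: (random.randint(0, 1), random.randint(0, 1))
chi = lambda xy: int(xy == (1, 1))
est = relative_error_query(chi, oracle, mu=0.1, theta=0.05)
print(est)
```

The point of the relative-error form is that small statistics either get a multiplicatively accurate estimate or are certified small, which is what makes the d_ν-style analysis work for quantities near zero.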

    General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Boosting

    We derive general bounds on the complexity of learning in the statistical query (SQ) model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the SQ model. This model was introduced by Kearns to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result: the first general upper bounds on the complexity of strong SQ learning. Since all SQ algorithms can be simulated in the PAC model with classification noise, we also obtain general upper bounds on learning in the presence of classification noise for classes which can be learned in the SQ model.
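The general idea of boosting a weak learner to a strong one can be illustrated with the standard reweighting scheme (AdaBoost-style, shown here only as a generic accuracy-boosting illustration, not the SQ-specific construction of the paper). Weak threshold stumps, none of which fits the toy data alone, combine into an accurate majority vote:

```python
import math

# Toy 1-D dataset that no single threshold stump classifies perfectly.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, 1, -1, 1, 1, -1, 1]

def stump_predict(thresh, sign, x):
    return sign if x > thresh else -sign

def best_stump(weights):
    """Exhaustively pick the stump with the lowest weighted error."""
    best = None
    for thresh in X:
        for sign in (-1, 1):
            err = sum(w for x, t, w in zip(X, y, weights)
                      if stump_predict(thresh, sign, x) != t)
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    return best

weights = [1 / len(X)] * len(X)
ensemble = []
for _ in range(10):
    err, thresh, sign = best_stump(weights)
    err = max(err, 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)   # weak learners get a vote ~ accuracy
    ensemble.append((alpha, thresh, sign))
    # Reweight: upweight the examples this weak hypothesis got wrong.
    weights = [w * math.exp(-alpha * t * stump_predict(thresh, sign, x))
               for x, t, w in zip(X, y, weights)]
    z = sum(weights)
    weights = [w / z for w in weights]

def strong_predict(x):
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1

accuracy = sum(strong_predict(x) == t for x, t in zip(X, y)) / len(X)
print(accuracy)
```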

    Generating, Visualizing and Evaluating High Quality Clusters for Information Organization

    We present and analyze the star clustering algorithm, and discuss an implementation that supports browsing and document retrieval through information organization. We define three parameters for evaluating a clustering algorithm, measuring the topic separation and topic aggregation it achieves. In the absence of benchmarks, we present a method for randomly generating clustering data. Data from our user study shows evidence that the star algorithm is effective for organizing information.
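The core of the star algorithm can be sketched as follows: threshold pairwise similarities into a graph, then repeatedly take the highest-degree unassigned vertex as a star center and its unassigned neighbors as satellites. The document similarities and threshold below are illustrative:

```python
def star_cluster(similarity, n, sigma):
    """similarity: dict mapping (i, j) with i < j to a score; n: doc count;
    sigma: similarity threshold for drawing a graph edge."""
    adj = {i: set() for i in range(n)}
    for (i, j), s in similarity.items():
        if s >= sigma:
            adj[i].add(j)
            adj[j].add(i)
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Star center: the unassigned vertex with the most unassigned neighbors.
        center = max(unassigned, key=lambda v: len(adj[v] & unassigned))
        satellites = adj[center] & unassigned
        clusters.append((center, satellites))
        unassigned -= satellites | {center}
    return clusters

# Toy example: 5 documents forming two natural groups, {0,1,2} and {3,4}.
sims = {(0, 1): 0.9, (0, 2): 0.8, (1, 2): 0.85, (3, 4): 0.9, (2, 3): 0.1}
print(star_cluster(sims, 5, sigma=0.5))
```

Each cluster is a dense star-shaped subgraph, which is what gives the algorithm its topic-aggregation behavior.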

    Tools and algorithms to advance interactive intrusion analysis via Machine Learning and Information Retrieval

    We consider typical tasks that arise in the intrusion analysis of log data from the perspectives of Machine Learning (ML) and Information Retrieval (IR), and we study a number of data organization and interactive learning techniques to improve the analyst's efficiency. In doing so, we attempt to translate intrusion analysis problems into the language of these disciplines and to offer metrics for evaluating the effect of the proposed techniques. The Kerf toolkit contains prototype implementations of these techniques, as well as data transformation tools that help bridge the gap between real-world log data formats and the ML and IR data models. We also describe the log representation approach on which the Kerf prototype tools are based. In particular, we describe the connection between decision trees, automatic classification algorithms and the log analysis techniques implemented in Kerf.
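The decision-tree connection can be illustrated with a hand-built tree over features extracted from log lines. The features, rules, labels, and log formats below are hypothetical; a Kerf-style tool would learn such a tree from analyst feedback rather than hard-code it:

```python
import re

def features(line):
    """Extract a few illustrative boolean features from a syslog-style line."""
    return {
        "failed_auth": bool(re.search(r"Failed password|authentication failure",
                                      line)),
        "root_target": "root" in line,
        "off_hours": bool(re.match(r"\S+ \S+ 0[0-5]:", line)),  # 00:00-05:59
    }

def classify(line):
    """Hand-built decision tree: each if-test is an internal node."""
    f = features(line)
    if f["failed_auth"]:
        if f["root_target"]:
            return "suspicious"     # failed login against a privileged account
        return "review"             # ordinary failed login
    if f["off_hours"]:
        return "review"             # activity at an unusual time of day
    return "benign"

logs = [
    "Mar 12 03:14:07 host sshd[411]: Failed password for root from 10.0.0.9",
    "Mar 12 14:02:33 host sshd[412]: Failed password for alice from 10.0.0.7",
    "Mar 12 14:05:00 host sshd[413]: Accepted password for alice",
]
print([classify(entry) for entry in logs])
```

Reading the tree's paths back as rules ("failed auth AND root target => suspicious") is what links decision trees to interpretable log triage.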